24 research outputs found

    Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy

    Data-driven predictive algorithms are widely used to automate and guide high-stakes decision-making such as bail and parole recommendations, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in -- ideals of distributive justice, as formulated by political and legal philosophers. The perspectives of feminist political philosophers on social justice, by contrast, have been largely neglected. Some feminist philosophers have criticized the local scope of the paradigm of distributive justice and have proposed corrective amendments to surmount its limitations. The present paper brings some key insights of feminist political philosophy to algorithmic fairness. The paper has three goals. First, I show that algorithmic fairness does not accommodate structural injustices in its current scope. Second, I defend the relevance of structural injustices -- as pioneered in the contemporary philosophical literature by Iris Marion Young -- to algorithmic fairness. Third, I take some steps in developing the paradigm of 'responsible algorithmic fairness' to correct for errors in the current scope and implementation of algorithmic fairness. I close with some reflections on directions for future research.
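
    As a rough illustration of the kind of mathematical metric this abstract alludes to, below is a minimal Python sketch of two widely used group-fairness measures: the demographic parity difference and the true-positive-rate gap checked by equal-opportunity criteria. The function names and toy data are illustrative assumptions, not drawn from the paper.

        # Minimal sketch of two common group-fairness metrics (illustrative
        # only; not the paper's own formulation).
        import numpy as np

        def demographic_parity_diff(y_pred, group):
            """Largest difference in positive-prediction rates across groups."""
            rates = [y_pred[group == g].mean() for g in np.unique(group)]
            return max(rates) - min(rates)

        def tpr_gap(y_true, y_pred, group):
            """Largest gap in true-positive rates across groups (assumes each
            group contains at least one positive instance)."""
            tprs = []
            for g in np.unique(group):
                mask = (group == g) & (y_true == 1)
                tprs.append(y_pred[mask].mean())
            return max(tprs) - min(tprs)

        # Toy data: binary predictions for two demographic groups "a" and "b".
        y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
        y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
        group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

        print(demographic_parity_diff(y_pred, group))  # 0.0: equal positive rates
        print(tpr_gap(y_true, y_pred, group))          # ~0.33: unequal opportunity

    Metrics of this kind evaluate a model's outputs for fixed groups within a single decision context, which is precisely the 'local scope' the paper argues cannot capture structural injustice.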

    In conversation with Artificial Intelligence: aligning language models with human values

    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can govern successful linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by discussing the practical implications of our proposal for the design of conversational agents that are aligned with these norms and values.

    Reconciling Governmental Use of Online Targeting With Democracy

    The societal and epistemological implications of online targeted advertising have been scrutinized by AI ethicists, legal scholars, and policymakers alike. However, the government's use of online targeting and its consequential socio-political ramifications remain under-explored from a critical socio-technical standpoint. This paper investigates the socio-political implications of governmental online targeting, using a case study of the UK government's application of such techniques for public policy objectives. We argue that this practice undermines democratic ideals, as it engenders three primary concerns -- Transparency, Privacy, and Equality -- that clash with fundamental democratic doctrines and values. To address these concerns, the paper introduces a preliminary blueprint for an AI governance framework that harmonizes governmental use of online targeting with certain democratic principles. Furthermore, we advocate for the creation of an independent, non-governmental regulatory body responsible for overseeing the process and monitoring the government's use of online targeting, a critical measure for preserving democratic values. Comment: Accepted for publication at the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2023).

    Counter Countermathematical Explanations

    Recently, there have been several attempts to generalize the counterfactual theory of causal explanations to mathematical explanations. The central idea of these attempts is to use conditionals whose antecedents express a mathematical impossibility. Such countermathematical conditionals are plugged into the explanatory scheme of the counterfactual theory and -- so is the hope -- capture mathematical explanations. Here, I dash the hope that countermathematical explanations simply parallel counterfactual explanations. In particular, I show that explanations based on countermathematicals are susceptible to three problems that counterfactual explanations do not face. These problems seriously challenge the prospects for a counterfactual theory of explanation that is meant to cover mathematical explanations.
